
    Connections, Symbols, and the Meaning of Intelligence

    More recently, debates in AI have focused on the implications of Connectionism. Connectionism is the hypothesis that distributed computations are capable of instantiating intelligent functions without relying on the representational character of symbols, but rather on the computational states themselves, which are called distributed representations (Haugeland, 1991). This distinction puts connectionism at odds with symbolic theory. The current debates tend to be over which theory will yield intelligent systems--symbolic or connectionist? But as we will soon see, this really amounts to a debate over which representational scheme is required for general intelligence.

    Robot Rights? Let's Talk about Human Welfare Instead

    The 'robot rights' debate, and its related question of 'robot responsibility', invokes some of the most polarized positions in AI ethics. While some advocate for granting robots rights on a par with human beings, others, in stark opposition, argue that robots are not deserving of rights but are objects that should be our slaves. Grounded in post-Cartesian philosophical foundations, we argue not just to deny robots 'rights', but to deny that robots, as artifacts emerging out of and mediating human being, are the kinds of things that could be granted rights in the first place. Once we see robots as mediators of human being, we can understand how the 'robot rights' debate is focused on first-world problems, at the expense of urgent ethical concerns such as machine bias, machine-elicited human labour exploitation, and erosion of privacy, all of which impact society's least privileged individuals. We conclude that, if human being is our starting point and human welfare is the primary concern, the negative impacts emerging from machinic systems, as well as the lack of responsibility taken by the people designing, selling and deploying such machines, remain the most pressing ethical discussion in AI. Comment: Accepted to the AIES 2020 conference in New York, February 2020. The final version of this paper will appear in Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society.

    The management of acute venous thromboembolism in clinical practice. Results from the European PREFER in VTE Registry

    Venous thromboembolism (VTE) is a significant cause of morbidity and mortality in Europe. Data from real-world registries are necessary, as clinical trials do not represent the full spectrum of VTE patients seen in clinical practice. We aimed to document the epidemiology, management and outcomes of VTE using data from a large, observational database. PREFER in VTE was an international, non-interventional disease registry conducted between January 2013 and July 2015 in primary and secondary care across seven European countries. Consecutive patients with acute VTE were documented and followed up over 12 months. PREFER in VTE included 3,455 patients with a mean age of 60.8 ± 17.0 years. Overall, 53.0 % were male. The majority of patients were assessed in the hospital setting as inpatients or outpatients (78.5 %). The diagnosis was deep-vein thrombosis (DVT) in 59.5 % and pulmonary embolism (PE) in 40.5 %. The most common comorbidities were the various types of cardiovascular disease (excluding hypertension; 45.5 %), hypertension (42.3 %) and dyslipidaemia (21.1 %). Following the index VTE, a large proportion of patients received initial therapy with heparin (73.2 %), almost half received a vitamin K antagonist (48.7 %) and nearly a quarter received a direct oral anticoagulant (DOAC; 24.5 %). Almost a quarter of all presentations were for recurrent VTE, with >80 % of previous episodes having occurred more than 12 months prior to baseline. In conclusion, PREFER in VTE has provided contemporary insights into VTE patients and their real-world management, including their baseline characteristics, risk factors, disease history, symptoms and signs, initial therapy and outcomes.

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    On the Origins of the Synthetic Mind: Working Models, Mechanisms, and Simulations

    140 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2006. This dissertation reconsiders the nature of scientific models through an historical study of the development of electronic models of the brain by Cybernetics researchers in the 1940s. By examining how these unique models were used in the brain sciences, it develops the concept of a "working model" for the brain sciences. Working models differ from theoretical models in that they are subject to manipulation and interactive experimentation, i.e. they are themselves objects of study and part of material culture. While these electronic brains are often disparaged by historians as toys and publicity stunts, I argue that they mediated between physiological theories of neurons and psychological theories of behavior so as to leverage their compelling material performances against the lack of observational data and sparse theoretical connections between neurology and psychology. I further argue that working models might be used by cognitive science to better understand how the brain develops performative representations of the world.

    Ethical Decision Making in Robots: Autonomy, Trust and Responsibility

    Autonomous robots such as self-driving cars are already able to make decisions that have ethical consequences. As such machines make increasingly complex and important decisions, we will need to know that their decisions are trustworthy and ethically justified. Hence we will need them to be able to explain the reasons for these decisions: ethical decision-making requires that decisions be explainable with reasons. We argue that for people to trust autonomous robots we need to know which ethical principles they are applying and that their application is deterministic and predictable. If a robot is a self-improving, self-learning type of robot whose choices and decisions are based on past experience, the decision it makes in any given situation may not be entirely predictable ahead of time or explainable after the fact. This combination of non-predictability and autonomy may confer a greater degree of responsibility on the machine, but it also makes the machine harder to trust.